Active Learning in Multilayer Perceptrons
Author
Abstract
We propose an active learning method with hidden-unit reduction, devised specifically for multilayer perceptrons (MLPs). First, we review our active learning method and point out that many Fisher-information-based methods applied to MLPs have a critical problem: the information matrix may be singular. To solve this problem, we derive the singularity condition of an information matrix and propose an active learning technique that is applicable to MLPs. Its effectiveness is verified through experiments.
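The singularity the abstract points to can be seen in a toy case: when two hidden units of an MLP carry identical weights, the per-parameter gradients become linearly dependent and the empirical Fisher information matrix loses rank. The sketch below is our own illustration (not the paper's construction): it builds the Gauss–Newton form of the Fisher matrix for a deliberately redundant one-hidden-layer network and checks its rank.

```python
import numpy as np

def gradient(x, W, v):
    """Gradient of the MLP output y = v @ tanh(W @ x) w.r.t. all parameters."""
    h = np.tanh(W @ x)
    dh = 1.0 - h ** 2                 # tanh'
    dW = np.outer(v * dh, x)          # d y / d W
    return np.concatenate([dW.ravel(), h])  # d y / d v is just h

rng = np.random.default_rng(0)
xs = rng.normal(size=(200, 3))

# Two *identical* hidden units -> redundant parameterization
W = np.array([[0.5, -0.3, 0.8],
              [0.5, -0.3, 0.8]])
v = np.array([0.7, 0.7])

# Empirical Fisher information (Gauss-Newton form for squared loss)
G = np.array([gradient(x, W, v) for x in xs])
F = G.T @ G / len(xs)

n_params = F.shape[0]                 # 8 parameters
print(n_params, np.linalg.matrix_rank(F))  # rank < 8: F is singular
```

Because the two hidden-unit gradients coincide, the design points can never make this matrix invertible; this is the kind of degeneracy a singularity condition for MLPs has to rule out.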
Similar Papers
An Active Learning Algorithm Based on Existing Training Data
A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about the problem being learnt, sufficiently accurate classification can be attained with fewer training data. Recently, such active learning has been receiving increasing interest. In this paper, we ...
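The abstract describes a learner that queries the data it is most uncertain about. As a generic illustration of that idea (not the specific algorithm proposed in the paper), pool-based uncertainty sampling for a binary classifier simply picks the unlabeled point whose predicted probability is closest to 0.5:

```python
import numpy as np

def pick_most_uncertain(model_probs):
    """Return the index of the pool point with predicted probability
    closest to 0.5, i.e. the point the classifier is least sure about."""
    return int(np.argmin(np.abs(np.asarray(model_probs) - 0.5)))

# Hypothetical predicted probabilities over an unlabeled pool
probs = [0.95, 0.51, 0.10, 0.88]
print(pick_most_uncertain(probs))  # 1: the 0.51 prediction is most uncertain
```

Labeling that point and retraining closes the loop; more refined criteria (e.g. expected variance reduction via the Fisher information) replace the 0.5-distance score with a model-based one.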
Fukumizu: Statistical Active Learning in Multilayer Perceptron
This paper proposes new methods of generating input locations actively in gathering training data, aiming at solving problems special to multilayer perceptrons. One of the problems is that the optimum input locations, which are calculated deterministically, sometimes result in badly distributed data and cause local minima in back-propagation training. Two probabilistic active learning methods, w...
Discrete All-positive Multilayer Perceptrons for Optical Implementation
All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of non-ideal activation functions which are truncated, asymmetric, and have a non-standard gain; restriction of the network parameters to non-negative values; and the limited accuracy of the weights. In this paper, a backpropagation-based learning rule is presented that compensates...
4. Multilayer perceptrons and back-propagation
Multilayer feed-forward networks, or multilayer perceptrons (MLPs), have one or several "hidden" layers of nodes. This implies that they have two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function (including the XOR which is, as we saw, not linearly separab...
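The XOR claim can be made concrete with a hand-wired two-layer network: one hidden unit that fires for OR, one that fires for AND, and an output unit that fires for "OR and not AND". A minimal sketch with step activations and weights chosen by hand for illustration:

```python
import numpy as np

def step(z):
    """Heaviside threshold activation."""
    return (z > 0).astype(int)

def xor_mlp(x1, x2):
    x = np.array([x1, x2])
    W = np.array([[1.0, 1.0],     # hidden unit 1: x1 + x2 > 0.5  -> OR
                  [1.0, 1.0]])    # hidden unit 2: x1 + x2 > 1.5  -> AND
    b = np.array([-0.5, -1.5])
    h = step(W @ x + b)
    # Output unit: OR minus AND, thresholded -> XOR
    return int(step(h @ np.array([1.0, -1.0]) - 0.5))

for a in (0, 1):
    for c in (0, 1):
        print(a, c, xor_mlp(a, c))
```

No single-layer perceptron can realize this truth table, but one hidden layer suffices, which is exactly the point the paragraph makes.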
On Langevin Updating in Multilayer Perceptrons
The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, which often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect.